AAAI 2023 - New Faculty Highlights

Total: 35

#1 Safety Validation of Learning-Based Autonomous Systems: A Multi-Fidelity Approach

Author: Ali Baheri

In recent years, learning-based autonomous systems have emerged as a promising tool for automating many crucial tasks. The key question is how we can build trust in such systems for safety-critical applications. My research focuses on creating and validating safety frameworks that leverage multiple sources of information. The ultimate goal is to establish a solid foundation for a long-term research program aimed at understanding the role of simulator fidelity in safety validation and robot learning.
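
To make the multi-fidelity idea concrete, here is a minimal sketch (not the author's framework): a cheap low-fidelity simulator screens many candidate scenarios, and only the most suspicious ones are re-evaluated in an expensive high-fidelity simulator. The scenario parameters, dynamics, and safety margins below are all hypothetical.

```python
import random

def low_fidelity_sim(scenario):
    # Hypothetical coarse model: cheap but noisy estimate of the safety margin.
    return scenario["gap"] - 0.5 * scenario["speed"] + random.gauss(0, 0.3)

def high_fidelity_sim(scenario):
    # Hypothetical detailed model: expensive but accurate safety margin.
    return scenario["gap"] - 0.55 * scenario["speed"]

def multi_fidelity_falsify(n_cheap=1000, n_expensive=20):
    scenarios = [{"gap": random.uniform(0.0, 5.0),
                  "speed": random.uniform(0.0, 10.0)} for _ in range(n_cheap)]
    # Screen every scenario cheaply; shortlist those that look least safe.
    shortlist = sorted(scenarios, key=low_fidelity_sim)[:n_expensive]
    # Spend the expensive simulation budget only on the shortlist.
    return [s for s in shortlist if high_fidelity_sim(s) < 0.0]

random.seed(0)
for s in multi_fidelity_falsify():
    print(f"potential failure: gap={s['gap']:.2f}, speed={s['speed']:.2f}")
```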

#2 Probabilistic Reasoning and Learning for Trustworthy AI

Author: YooJung Choi

As automated decision-making systems are increasingly deployed in areas with personal and societal impacts, there is a growing demand for artificial intelligence and machine learning systems that are fair, robust, interpretable, and generally trustworthy. Ideally, we would wish to answer questions regarding these properties and provide guarantees about any automated system before it is deployed in the real world. This raises the need for a unified language and framework under which we can reason about and develop trustworthy AI systems. This talk will discuss how tractable probabilistic reasoning and learning provides such a framework. It is important to note that guarantees regarding fairness, robustness, etc., hold with respect to the distribution of the world in which the decision-making system operates. For example, to see whether automated loan decisions are biased against a certain gender, one may compare the average decision for each gender; this requires knowledge of how the features used in the decision are distributed for each gender. Moreover, there are inherent uncertainties in modeling this distribution, in addition to the uncertainties that arise when deploying a system in the real world, such as missing or noisy information. We can handle such uncertainties in a principled way through probabilistic reasoning. Taking fairness-aware learning as an example, we can deal with biased labels in the training data by explicitly modeling the observed labels as being generated from some probabilistic process that injects bias/noise into hidden, fair labels, particularly in a way that best explains the observed data. A key remaining challenge is that we need models that can closely fit complex real-world distributions (i.e., expressive) while also being amenable to exact and efficient inference of probabilistic queries (i.e., tractable). I will show that probabilistic circuits, a family of tractable probabilistic models, offer both benefits. To ultimately develop a common framework for studying various areas of trustworthy AI (e.g., privacy, fairness, explanations, etc.), we need models that can flexibly answer different questions, even ones that were not foreseen. This talk will thus survey efforts to expand the horizon of the complex reasoning capabilities of probabilistic circuits, especially highlighted by a modular approach that answers various queries via a pipeline of a handful of simple tractable operations.
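
As a toy illustration of tractability (not the probabilistic-circuit learners from the talk), the sketch below builds a tiny circuit over a group attribute G and a decision D out of indicator, product, and weighted-sum nodes, then answers a demographic-parity-style query with one bottom-up pass per marginal. The structure and weights are invented for the example.

```python
from math import prod

# Toy probabilistic circuit over binary variables G (group) and D (decision).
def leaf(var, val):
    # Indicator leaf: matches evidence; evaluates to 1 if var is marginalized out.
    return lambda ev: 1.0 if var not in ev else float(ev[var] == val)

def product(*children):
    return lambda ev: prod(c(ev) for c in children)

def weighted_sum(weights, children):
    return lambda ev: sum(w * c(ev) for w, c in zip(weights, children))

# p(G, D) = p(G) * p(D | G), encoded as a sum of two product branches.
g0, g1 = leaf("G", 0), leaf("G", 1)
d0, d1 = leaf("D", 0), leaf("D", 1)
branch_g0 = product(g0, weighted_sum([0.7, 0.3], [d0, d1]))  # p(D=1|G=0)=0.3
branch_g1 = product(g1, weighted_sum([0.4, 0.6], [d0, d1]))  # p(D=1|G=1)=0.6
circuit = weighted_sum([0.5, 0.5], [branch_g0, branch_g1])   # p(G=0)=0.5

# Exact marginals in one pass each; compare decision rates across groups.
p_d1_g0 = circuit({"G": 0, "D": 1}) / circuit({"G": 0})
p_d1_g1 = circuit({"G": 1, "D": 1}) / circuit({"G": 1})
print(f"P(D=1|G=0)={p_d1_g0:.2f}, P(D=1|G=1)={p_d1_g1:.2f}, "
      f"parity gap={abs(p_d1_g1 - p_d1_g0):.2f}")
```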

#3 The Automatic Computer Scientist

Author: Andrew Cropper

Algorithms are ubiquitous: they track our sleep, help us find cheap flights, and even help us see black holes. However, designing novel algorithms is extremely difficult, and we do not have efficient algorithms for many fundamental problems. The goal of my research is to accelerate algorithm discovery by building an automatic computer scientist. To work towards this goal, my research focuses on inductive logic programming, a form of machine learning in which my collaborators and I have demonstrated major advances in automated algorithm discovery over the past five years. In this talk and paper, I survey these advances.

#4 Perception for General-purpose Robot Manipulation

Author: Karthik Desingh

To autonomously perform tasks, a robot should continually perceive the state of its environment, reason about the task at hand, and plan and execute appropriate actions. In this pipeline, perception remains largely unsolved and is one of the most challenging problems. Common indoor environments typically pose two main problems: 1) inherent occlusions that lead to unreliable observations of objects, and 2) the presence and involvement of a wide range of objects with varying physical and visual attributes (i.e., rigid, articulated, deformable, granular, transparent, etc.). Thus, we need algorithms that can accommodate perceptual uncertainty in state estimation and generalize to a wide range of objects. Probabilistic inference methods have been highly suitable for modeling perceptual uncertainty, and data-driven approaches using deep learning techniques have shown promising advancements toward generalization. Perception for manipulation is a more intricate setting requiring the best of both worlds. My research aims to develop robot perception algorithms that can generalize over objects and tasks while accommodating perceptual uncertainty to support robust task execution in the real world. In this presentation, I will briefly highlight my work in these two threads.

#5 Cooperative Multi-Agent Learning in a Complex World: Challenges and Solutions

Author: Yali Du

Over the past few years, artificial intelligence (AI) has achieved great success in a variety of applications, such as image classification and recommendation systems. This success has often been achieved by training machine learning models on static datasets, where inputs and desired outputs are provided. However, we are now seeing a shift in this paradigm. Instead of learning from static datasets, machine learning models are increasingly being trained through feedback from their interactions with the world. This is particularly important when machine learning models are deployed in the real world, as their decisions can often have an impact on other agents, turning the decision-making process into a multi-agent problem. As a result, multi-agent learning in complex environments is a critical area of research for the next generation of AI, particularly in the context of cooperative tasks. Cooperative multi-agent learning is an essential problem for practitioners to consider as it has the potential to enable a wide range of multi-agent tasks. In this presentation, we will review the background and challenges of cooperative multi-agent learning, and survey our research that aims to address these challenges.

#6 Distributed Stochastic Nested Optimization for Emerging Machine Learning Models: Algorithm and Theory

Author: Hongchang Gao

Traditional machine learning models can be formulated as the expected risk minimization (ERM) problem $\min_{w \in \mathbb{R}^d} \mathbb{E}_{\xi}[\ell(w; \xi)]$, where $w \in \mathbb{R}^d$ denotes the model parameter, $\xi$ represents the training samples, and $\ell(\cdot)$ is the loss function. Numerous optimization algorithms, such as stochastic gradient descent (SGD), have been developed to solve the ERM problem. However, a wide range of emerging machine learning models lie beyond this class of optimization problems, such as model-agnostic meta-learning (Finn, Abbeel, and Levine 2017). Of particular interest to my research is the stochastic nested optimization (SNO) problem, whose objective function has a nested structure. Specifically, I have been focusing on two instances of this kind of problem: stochastic compositional optimization (SCO) problems, which cover meta-learning, area-under-the-precision-recall-curve optimization, contrastive self-supervised learning, etc., and stochastic bilevel optimization (SBO) problems, which can be applied to meta-learning, hyperparameter optimization, neural architecture search, etc. With the emergence of large-scale distributed data, such as user data generated on mobile devices or intelligent hardware, it is imperative to develop distributed optimization algorithms for SNO (distributed SNO). A significant challenge in optimizing distributed SNO problems is that the stochastic (hyper-)gradient is a biased estimate of the full gradient, so existing distributed optimization algorithms suffer from slow convergence rates when applied to them. In this talk, I will discuss my recent works on distributed SCO (Gao and Huang 2021; Gao, Li, and Huang 2022) and distributed SBO (Gao, Gu, and Thai 2022; Gao 2022) under both centralized and decentralized settings, including algorithmic details about reducing the bias of the stochastic gradient, theoretical convergence rates, and practical machine learning applications, and then highlight challenges for future research.
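
A toy numerical illustration of the bias mentioned above, under assumed $f$ and $g$ (not the author's algorithms): for a compositional objective $F(w) = f(\mathbb{E}_{\xi}[g(w; \xi)])$, plugging a single sample of $g$ into $\nabla f$ gives a biased gradient, while tracking the inner expectation with a moving average, in the spirit of SCGD-style methods, shrinks the bias.

```python
import random

# F(w) = f(E_xi[g(w; xi)]) with f(u) = u**3 and g(w; xi) = w + xi,
# xi ~ N(0, 1).  Then E[g] = w and dF/dw = 3*w**2, but the naive
# single-sample gradient 3*(w + xi)**2 has expectation 3*w**2 + 3.

def naive_grad(w, rng):
    xi = rng.gauss(0, 1)
    return 3.0 * (w + xi) ** 2

def tracked_grad(w, u, beta, rng):
    xi = rng.gauss(0, 1)
    u = (1 - beta) * u + beta * (w + xi)   # running estimate of E[g(w)]
    return 3.0 * u ** 2, u

rng = random.Random(0)
w, n = 1.0, 20000
naive = sum(naive_grad(w, rng) for _ in range(n)) / n

u, acc = w, 0.0
for _ in range(n):
    g, u = tracked_grad(w, u, beta=0.01, rng=rng)
    acc += g

print(f"true dF/dw       = {3.0 * w**2:.2f}")
print(f"naive estimate   = {naive:.2f}  (bias ~ 3)")
print(f"tracked estimate = {acc / n:.2f}  (bias shrinks with beta)")
```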

#7 Targeted Knowledge Infusion To Make Conversational AI Explainable and Safe

Author: Manas Gaur

Conversational Systems (CSys) represent practical and tangible outcomes of advances in NLP and AI. CSys see continuous improvements through unsupervised training of large language models (LLMs) on vast amounts of generic training data. However, when these CSys are proposed for use in domains like mental health, they fail to meet the acceptable standards of clinical care, such as the clinical process embodied in the Patient Health Questionnaire (PHQ-9). The talk will present Knowledge-infused Learning (KiL), a paradigm within NeuroSymbolic AI that focuses on making machine/deep learning models (i) learn over knowledge-enriched data, (ii) learn to follow guidelines in process-oriented tasks for safe and reasonable generation, and (iii) learn to leverage multiple contexts and stratified knowledge to yield user-level explanations. KiL has established Knowledge-Intensive Language Understanding, a set of tasks for assessing safety, explainability, and conceptual flow in CSys.

#8 Accountability Layers: Explaining Complex System Failures by Parts

Author: Leilani H. Gilpin

With the rise of AI used for critical decision-making, many important predictions are made by complex and opaque AI algorithms. The aim of eXplainable Artificial Intelligence (XAI) is to make these opaque decision-making algorithms more transparent and trustworthy. This is often done by constructing an "explainable model" for a single modality or subsystem. However, this approach fails for complex systems that are made out of multiple parts. In this paper, I discuss how to explain complex system failures. I represent a complex machine as a hierarchical model of introspective sub-systems working together towards a common goal. The subsystems communicate in a common symbolic language. This work creates a set of explanatory accountability layers for trustworthy AI.

#9 Generative Decision Making Under Uncertainty

Author: Aditya Grover

In the fields of natural language processing (NLP) and computer vision (CV), recent advances in generative modeling have led to powerful machine learning systems that can effectively learn from large labeled and unlabeled datasets. These systems, by and large, apply a uniform pretrain-finetune pipeline to sequential data streams and have achieved state-of-the-art performance across many tasks and benchmarks. In this talk, we will present recent algorithms that extend this paradigm to sequential decision making by casting it as an inverse problem that can be solved via deep generative models. These generative approaches are stable to train, provide a flexible interface for single- and multi-task inference, and generalize exceedingly well outside their training datasets. We instantiate these algorithms in the context of reinforcement learning and black-box optimization. Empirically, we demonstrate that these approaches perform exceedingly well on high-dimensional benchmarks, outperforming the current state-of-the-art approaches based on forward models.
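
As a hedged, tabular stand-in for the return-conditioned sequence models this line of work builds on (e.g., Decision Transformer-style methods), the sketch below fits p(action | return) by counting over logged trajectories of a toy environment and then acts by conditioning on a high target return. The environment, counts, and smoothing are all illustrative.

```python
import random
from collections import defaultdict

def step(action, rng):
    # Hypothetical one-step environment: action 1 pays off w.p. 0.8,
    # action 0 w.p. 0.2.
    return int(rng.random() < (0.8 if action == 1 else 0.2))

rng = random.Random(0)

# Logged dataset from a uniformly random behavior policy.
logged = []
for _ in range(5000):
    a = rng.randrange(2)
    logged.append((a, step(a, rng)))

# "Training": empirical action counts conditioned on the achieved return.
counts = defaultdict(lambda: [1, 1])            # Laplace smoothing
for a, r in logged:
    counts[r][a] += 1

def act(target_return, rng):
    c0, c1 = counts[target_return]
    return int(rng.random() >= c0 / (c0 + c1))  # sample p(action | return)

# "Inference": condition the generative policy on the desired outcome.
total = sum(step(act(1, rng), rng) for _ in range(5000))
print(f"mean return conditioned on return=1: {total / 5000:.2f} (random ~0.50)")
```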

#10 Food Information Engineering: A Systematic Literature Review

Author: Azanzi Jiomekong

In recent years, research on food information has given rise to the domain of food information engineering. The goal of this paper is to provide the research community with a systematic literature review of the methodologies, methods, and tools used in this domain.

#11 Better Environments for Better AI

Author: Sarah Keren

Most past research aimed at increasing the capabilities of AI methods has focused exclusively on the AI agent itself, i.e., given some input, what improvements to the agent's reasoning will yield the best possible output. In my research, I take a novel approach to increasing the capabilities of AI agents via the design of the environments in which they are intended to act. My methods for automated design identify the inherent capabilities and limitations of AI agents with respect to their environment and find the best way to modify the environment to account for those limitations and maximize the agents' performance. The future will bring an ever-increasing set of interactions between people and automated agents, whether at home, at the workplace, on the road, or across many other everyday settings. Autonomous vehicles, robotic tools, medical devices, and smart homes all allow ample opportunity for human-robot and multi-agent interactions. In these settings, recognizing what agents are trying to achieve, providing relevant assistance, and supporting effective collaboration are essential tasks, and all of them can be enhanced via careful environment design. However, the increasing complexity of the systems we use and the environments in which we operate makes devising good design solutions extremely challenging. This stresses the importance of developing automated design tools to help determine the most effective ways to apply change and enable robust AI systems. My long-term goal is to provide theoretical foundations for designing AI systems that are capable of effective partnership in sustainable and efficient collaborations among automated agents, as well as between automated agents and people.

#12 Recent Developments in Data-Driven Algorithms for Discrete Optimization

Author: Elias B. Khalil

The last few years have witnessed a renewed interest in “data-driven algorithm design” (Balcan 2020), the use of Machine Learning (ML) to tailor an algorithm to a distribution of instances. More than a decade ago, advances in algorithm configuration (Hoos 2011) paved the way for the use of historical data to modify an algorithm’s (typically fixed, static) parameters. In discrete optimization (e.g., satisfiability, integer programming, etc.), exact and inexact algorithms for NP-hard problems often involve heuristic search decisions (Lodi 2013), abstracted as parameters, that can demonstrably benefit from tuning on historical instances from the application of interest. While useful, algorithm configuration may be insufficient: setting the parameters of an algorithm before solving the input instance is still a static, high-level decision. In contrast, we have been exploring a suite of ML and Reinforcement Learning (RL) approaches that tune iterative optimization algorithms, such as branch-and-bound for integer programming or construction heuristics, at the iteration level (Khalil et al. 2016, 2017; Dai et al. 2017; Chmiela et al. 2021; Gupta et al. 2022; Chi et al. 2022; Khalil, Vaezipoor, and Dilkina 2022; Khalil, Morris, and Lodi 2022; Alomrani, Moravej, and Khalil 2022; Cappart et al. 2021; Gupta et al. 2020). We will survey our most recent work in this area: (1) new methods for learning in MILP branch-and-bound (Gupta et al. 2020, 2022; Chmiela et al. 2021; Khalil, Vaezipoor, and Dilkina 2022; Khalil, Morris, and Lodi 2022); (2) RL for online combinatorial optimization and large-scale linear programming (Alomrani, Moravej, and Khalil 2022; Chi et al. 2022); and (3) neural network approximations for stochastic programming (Dumouchelle et al. 2022).
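
A minimal sketch of what an iteration-level learned decision can look like inside branch-and-bound (the features, weights, and scoring model are invented, not those of the cited systems): a learned scorer replaces a hand-coded rule for selecting the fractional variable to branch on.

```python
# Hypothetical learned branching-variable selection for MILP.
def candidate_features(x_frac, obj_coef):
    # Simple per-variable features: fractionality and objective weight.
    frac = min(x_frac - int(x_frac), int(x_frac) + 1 - x_frac)
    return [frac, abs(obj_coef)]

def learned_score(features, weights=(0.9, 0.1)):
    # Stand-in for a trained model (e.g., one imitating strong branching).
    return sum(w * f for w, f in zip(weights, features))

def select_branching_variable(lp_solution, obj):
    fractional = [(j, x) for j, x in enumerate(lp_solution)
                  if abs(x - round(x)) > 1e-6]
    # Branch on the highest-scoring fractional variable.
    return max(fractional,
               key=lambda jx: learned_score(candidate_features(jx[1], obj[jx[0]])))

lp_solution = [0.0, 0.5, 0.9, 1.0]   # fractional LP relaxation values
obj = [3.0, 1.0, 2.0, 5.0]
j, x = select_branching_variable(lp_solution, obj)
print(f"branch on x_{j} (value {x}) -> x_{j} <= {int(x)} or x_{j} >= {int(x) + 1}")
```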

#13 Advances in AI for Safety, Equity, and Well-Being on Web and Social Media: Detection, Robustness, Attribution, and Mitigation

Author: Srijan Kumar

In the talk, I shall describe my lab’s recent advances in AI, applied machine learning, and data mining to combat malicious actors (sockpuppets, ban evaders, etc.) and dangerous content (misinformation, hate, etc.) on web and social media platforms. My vision is to create a trustworthy online ecosystem for everyone and create the next generation of socially-aware methods that promote health, equity, and safety. Broadly, in my research, I have created novel graph, content (NLP, multimodality), and adversarial machine learning methods leveraging terabytes of data to detect, predict, and mitigate online threats. I shall describe the advancements made in my group across four key thrusts: (1) Detection of harmful content and malicious actors across platforms, languages, and modalities, (2) Robustifying detection models against adversarial actors by predicting future malicious activities, (3) Attributing the impact of harmful content and the role of recommender systems, and (4) Developing mitigation techniques to counter misinformation by professionals and the crowd.

#14 Intelligent Planning for Large-Scale Multi-Robot Coordination

Author: Jiaoyang Li

Robots will play a crucial role in the future and will need to work as teams in increasingly complex applications. Advances in robotics have laid the hardware foundations for building large-scale multi-robot systems, but how to coordinate robots intelligently remains a difficult problem. We believe that graph-search-based planning can systematically exploit the combinatorial structure of multi-robot coordination problems and efficiently generate solutions with rigorous guarantees on correctness, completeness, and solution quality. We started with one problem that is central to many multi-robot applications: Multi-Agent Path Finding (MAPF), the NP-hard problem of planning collision-free paths for a team of agents while minimizing their travel times. We have addressed the MAPF problem from both (1) a theoretical perspective, by developing efficient algorithms that solve large MAPF instances with completeness and optimality guarantees via a variety of AI and optimization technologies, such as constraint reasoning, heuristic search, stochastic local search, and machine learning, and (2) an applied perspective, by developing algorithmic techniques for integrating MAPF with task planning and execution in various multi-robot systems, such as mobile robot coordination, traffic management, drone swarm control, multi-arm assembly, and character control in video games. This paper is part of the AAAI-23 New Faculty Highlights.
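
For concreteness, here is a minimal prioritized-planning baseline for MAPF, not the optimal solvers surveyed in the talk: agents are planned one at a time with space-time A* on a 4-connected grid, and each planned path is reserved so later agents avoid it (edge/swap conflicts are omitted for brevity).

```python
from heapq import heappush, heappop

def neighbors(cell, grid):
    x, y = cell
    for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):  # wait or move
        nx, ny = x + dx, y + dy
        if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
            yield (nx, ny)

def space_time_astar(start, goal, grid, reserved, horizon=50):
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_list = [(h(start), 0, start, [start])]
    closed = set()
    while open_list:
        _, t, cell, path = heappop(open_list)
        if cell == goal:
            return path
        if (cell, t) in closed or t >= horizon:
            continue
        closed.add((cell, t))
        for nxt in neighbors(cell, grid):
            if (nxt, t + 1) not in reserved:     # avoid vertex conflicts
                heappush(open_list, (t + 1 + h(nxt), t + 1, nxt, path + [nxt]))
    return None

def prioritized_mapf(starts, goals, grid, horizon=50):
    reserved, paths = set(), []
    for start, goal in zip(starts, goals):       # plan in priority order
        path = space_time_astar(start, goal, grid, reserved, horizon)
        if path is None:
            raise RuntimeError("no conflict-free path under this priority order")
        for t in range(horizon):                 # reserve path, then the goal cell
            reserved.add((path[min(t, len(path) - 1)], t))
        paths.append(path)
    return paths

grid = [[0, 0, 0],
        [0, 1, 0],                               # 1 = obstacle
        [0, 0, 0]]
for i, p in enumerate(prioritized_mapf([(0, 0), (2, 0)], [(2, 2), (0, 2)], grid)):
    print(f"agent {i}: {p}")
```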

#15 Robust and Adaptive Deep Learning via Bayesian Principles

Author: Yingzhen Li

Deep learning models have achieved tremendous successes in accurate predictions for computer vision, natural language processing and speech recognition applications. However, to succeed in high-risk and safety-critical domains such as healthcare and finance, these deep learning models need to be made reliable and trustworthy. Specifically, they need to be robust and adaptive to real-world environments which can be drastically different from the training settings. In this talk, I will advocate for Bayesian principles to achieve the goal of building robust and adaptive deep learning models. I will introduce a suite of uncertainty quantification methods for Bayesian deep learning, and demonstrate applications enabled by accurate uncertainty estimates, e.g., robust prediction, continual learning and repairing model failures. I will conclude by discussing the research challenges and potential impact for robust and adaptive deep learning models. This paper is part of the AAAI-23 New Faculty Highlights.
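
One concrete uncertainty-quantification baseline in this spirit is Monte Carlo dropout: keep dropout active at test time and average several stochastic forward passes to obtain a predictive mean and spread. The tiny untrained network below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)   # toy 1-hidden-layer net
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def forward(x, drop_p=0.5):
    h = np.tanh(x @ W1 + b1)
    mask = rng.random(h.shape) > drop_p           # dropout stays ON at test time
    h = h * mask / (1.0 - drop_p)
    return h @ W2 + b2

x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(100)])
mean, std = samples.mean(), samples.std()
print(f"prediction {mean:.3f} +/- {std:.3f}  (spread ~ uncertainty proxy)")
```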

#16 AAAI New Faculty Highlights: General and Scalable Optimization for Robust AI

Author: Sijia Liu

Deep neural networks (DNNs) can easily be manipulated by an adversary into outputting drastically different predictions, and this can be done in a controlled and directed way. This process is known as an adversarial attack and is considered one of the major hurdles to using DNNs in high-stakes and real-world applications. Although developing methods to secure DNNs against adversaries is now a primary research focus, the field still suffers from limitations such as a lack of optimization generality and a lack of optimization scalability. My research highlights will offer a holistic understanding of the optimization foundations for robust AI, peer into their emerging challenges, and present recent solutions developed by my research group.
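
A minimal sketch of the kind of attack described above, on a toy logistic-regression model rather than a DNN: the fast gradient sign method (FGSM) perturbs the input in the direction of the sign of the loss gradient, and a small budget suffices to flip the prediction. Weights and inputs are invented.

```python
import numpy as np

w, b = np.array([2.0, -1.5]), 0.1        # a (hypothetical) trained model

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x, y = np.array([0.8, 0.2]), 1           # clean input, true label
# Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w.
grad_x = (predict_proba(x) - y) * w
x_adv = x + 0.5 * np.sign(grad_x)        # epsilon = 0.5 attack budget

print(f"clean  p(y=1) = {predict_proba(x):.3f}")     # ~0.802
print(f"attack p(y=1) = {predict_proba(x_adv):.3f}") # ~0.413: prediction flips
```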

#17 Combining Runtime Monitoring and Machine Learning with Human Feedback

Author: Anna Lukina

State-of-the-art machine-learned controllers for autonomous systems demonstrate unbeatable performance in scenarios known from training. However, in evolving environments, such as changing weather or unexpected anomalies, safety and interpretability remain the greatest and most urgent scientific challenges for making autonomous systems reliable. Existing machine-learning approaches focus on recovering lost performance but leave the system open to potential safety violations. Formal methods address this problem by rigorously analysing a smaller representation of the system, but they rarely prioritize the performance of the controller. We propose to combine insights from formal verification and runtime monitoring with interpretable machine-learning design to guarantee the reliability of autonomous systems.
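
A minimal sketch of the proposed combination, with every component a stand-in: a runtime monitor checks each action of a learned controller against a formal invariant, substitutes a verified fallback action on violation, and logs the override as feedback for retraining.

```python
def learned_controller(state):
    # Stand-in for a neural policy: always accelerates.
    return {"accel": 2.0}

def safe_fallback(state):
    return {"accel": -1.0}               # verified conservative action

def violates_property(state, action, dt=0.1, v_max=10.0):
    # Assumed safety invariant: speed must stay below v_max.
    return state["speed"] + action["accel"] * dt > v_max

def shielded_step(state, log, dt=0.1):
    action = learned_controller(state)
    if violates_property(state, action, dt):
        log.append(("override", dict(state)))  # evidence for human feedback
        action = safe_fallback(state)
    state["speed"] += action["accel"] * dt
    return state

state, log = {"speed": 9.5}, []
for _ in range(10):
    state = shielded_step(state, log)
print(f"final speed {state['speed']:.2f}, overrides logged: {len(log)}")
```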

#18 Towards Safe and Resilient Autonomy in Multi-Robot Systems

Author: Wenhao Luo

In the near future, autonomous systems such as multi-robot systems are envisioned to increasingly co-exist with humans in our daily lives, from household service to large-scale warehouse logistics, agricultural environment sampling, and smart cities. In these applications, robots and humans, as networked heterogeneous components, will frequently interact with each other in a variety of scenarios under uncertain, rapidly-changing, and possibly hostile environments. On one hand, harmonious interactions among robots, as well as between robots and humans, require the safe integration (e.g., collision-free close-proximity interactions) of heterogeneous robot, human, and human-robot autonomy. On the other hand, reliable interactions among autonomous multi-robot systems often call for resilient system integrity (e.g., communication capability under potential robot failures) to retain the capability of accomplishing complex tasks through coordinated behaviors. In this talk, I will discuss our recent works towards safe autonomy and resilient autonomy that aim to facilitate correct-by-design robotic behaviors in a variety of applications.
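
As one concrete ingredient of safe interaction (a sketch under assumed single-integrator dynamics, not the works referenced above): a control-barrier-function-style safety filter minimally corrects a nominal command so that a distance constraint to an obstacle keeps holding. With a single constraint, the usual QP reduces to a closed-form projection.

```python
import numpy as np

def safety_filter(x, u_nom, obstacle, d_min=1.0, alpha=1.0):
    # Barrier h(x) = ||x - obstacle||^2 - d_min^2 >= 0; enforce
    # dh/dt >= -alpha * h, i.e., the linear constraint a @ u >= b.
    diff = x - obstacle
    h = diff @ diff - d_min**2
    a, b = 2.0 * diff, -alpha * h
    if a @ u_nom >= b:
        return u_nom                     # nominal command is already safe
    # Minimal correction: project u_nom onto the half-space a @ u >= b.
    return u_nom + (b - a @ u_nom) / (a @ a) * a

x = np.array([2.0, 0.0])                 # robot position
u_nom = np.array([-1.0, 0.0])            # heading straight at the obstacle
u = safety_filter(x, u_nom, obstacle=np.array([0.0, 0.0]))
print(f"nominal {u_nom} -> filtered {u}")  # approach speed is throttled
```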

#19 Monitoring and Intervening on Large Populations of Weakly Coupled Processes with Social Impact Applications

Author: Andrew Perrault

Many real-world sequential decision problems can be decomposed into processes with independent dynamics that are coupled via the action structure. We discuss recent work on such problems and future directions.

#20 Internal Robust Representations for Domain Generalization

Author: Mohammad Rostami

Model generalization under distributional changes remains a significant challenge for machine learning. We present the consolidation of the internal representation of the training data in a model as a strategy for improving model generalization.

#21 Planning and Learning for Reliable Autonomy in the Open World

Author: Sandhya Saisubramanian

Safe and reliable decision-making is critical for long-term deployment of autonomous systems. Despite the recent advances in artificial intelligence, ensuring safe and reliable operation of human-aligned autonomous systems in open-world environments remains a challenge. My research focuses on developing planning and learning algorithms that support reliable autonomy in fully and partially observable environments, in the presence of uncertainty, limited information, and limited resources. This talk summarizes my recent research towards reliable autonomy.

#22 Dynamics of Cooperation and Conflict in Multiagent Systems

Author: Fernando P. Santos

Meeting today’s major scientific and societal challenges requires understanding the dynamics of cooperation, coordination, and conflict in complex adaptive systems (CAS). Artificial Intelligence (AI) is intimately connected with these challenges, both as an application domain and as a source of new computational techniques: on the one hand, AI suggests new algorithmic recommendations and interaction paradigms, offering novel possibilities to engineer cooperation and alleviate conflict in multiagent (hybrid) systems; on the other hand, new learning algorithms provide improved techniques to simulate sophisticated agents and increasingly realistic CAS. My research lies at the interface between CAS and AI: I develop computational methods to understand cooperation and conflict in multiagent systems, and how these depend on systems’ design and incentives. I focus on mapping interaction rules and incentives onto emerging macroscopic patterns and long-term dynamics. Examples of this research agenda, which I will survey in this talk, include modelling (1) the connection between reputation systems and cooperation dynamics, (2) the role of agents with hard-coded strategies in stabilizing fair behaviors in a population, and (3) the impact of recommendation algorithms on potential sources of conflict (e.g., radicalization and polarization) in a system composed of adaptive agents influencing each other over time.
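
To make theme (1) concrete, here is a toy replicator-dynamics model in the spirit of indirect-reciprocity studies (payoffs and parameters are illustrative, not from the cited work): discriminators cooperate only with partners of good reputation, and cooperation survives once reputations are observed often enough (roughly q > c/b).

```python
def payoffs(x, q, b=3.0, c=1.0):
    # x: fraction of discriminators; q: prob. a partner's reputation is known.
    f_disc = x * (b - c) + (1 - x) * (-(1 - q) * c)  # exploited only when unaware
    f_alld = x * (1 - q) * b                          # earns only from unaware donors
    return f_disc, f_alld

def evolve(x0, q, steps=5000, dt=0.01):
    x = x0
    for _ in range(steps):
        f_d, f_a = payoffs(x, q)
        x += dt * x * (1 - x) * (f_d - f_a)           # replicator equation
    return x

for q in (0.1, 0.6):   # below vs. above the c/b = 1/3 visibility threshold
    print(f"reputation visibility q={q}: cooperators -> {evolve(0.9, q):.2f}")
```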

#23 Combating Disinformation on Social Media and Its Challenges: A Computational Perspective

Author: Kai Shu

The use of social media has accelerated information sharing and enabled instantaneous communication. The low barrier to entry enables more users to participate and keeps them engaged longer, incentivizing individuals with hidden agendas to spread disinformation online in order to manipulate information and sway opinion. Disinformation, such as fake news, hoaxes, and conspiracy theories, has increasingly become a hindrance to the functioning of online social media as an effective channel for trustworthy information. Therefore, it is imperative to understand disinformation and systematically investigate how to improve resistance against it. This article highlights relevant theories and recent advancements in detecting disinformation from a computational perspective, and calls for future interdisciplinary research.

#24 Human-Aware AI – A Foundational Framework for Human-AI Interaction

Author: Sarath Sreedharan

We are living through a revolutionary moment in AI history. We are seeing the development of impressive new AI systems at a rate that was unimaginable just a few years ago. However, AI's true potential to transform society remains unrealized, in no small part due to the inability of current systems to work effectively with people. A major hurdle to achieving such coordination is the inherent asymmetry between the AI system and its users. In this talk, I will discuss how the framework of Human-Aware AI (HAAI) provides us with the tools required to bridge this gap and support fluent and intuitive coordination between the AI system and its users.

#25 Towards Unified, Explainable, and Robust Multisensory Perception

Author: Yapeng Tian

Humans perceive surrounding scenes through multiple senses with multisensory integration. For example, hearing helps capture the spatial location of a racing car behind us, and seeing people's talking faces can strengthen our perception of their speech. However, today's state-of-the-art scene understanding systems are usually designed to rely on a single audio or visual modality. Ignoring multisensory cooperation has become one of the key bottlenecks in creating intelligent systems with human-level perception capability, which impedes the real-world application of existing scene understanding models. To address this limitation, my research has pioneered marrying computer vision with computer audition to create multimodal systems that can learn to understand audio and visual data. In particular, my current research focuses on asking and solving fundamental problems in a fresh research area, audio-visual scene understanding, and strives to develop unified, explainable, and robust multisensory perception machines. The three themes are distinct yet interconnected, and all of them are essential for designing powerful and trustworthy perception systems. In my talk, I will give a brief overview of this new research area and then introduce my work in these three research thrusts.